Foundation models have recently shown excellent performance on a variety of downstream tasks in computer vision. However, most existing vision foundation models focus only on image-level pretraining and adaptation, which is limiting for dynamic and complex video-level understanding tasks. To fill this gap, we present general video foundation models, InternVideo, which take advantage of both generative and discriminative self-supervised video learning. Specifically, InternVideo efficiently explores masked video modeling and video-language contrastive learning as the pretraining objectives, and selectively coordinates the video representations of these two complementary frameworks in a learnable manner to boost various video applications. Without bells and whistles, InternVideo achieves state-of-the-art performance on 39 video datasets spanning extensive tasks, including video action recognition/detection, video-language alignment, and open-world video applications. In particular, our method obtains 91.1% and 77.2% top-1 accuracy on the challenging Kinetics-400 and Something-Something V2 benchmarks, respectively. All of these results effectively demonstrate the generality of InternVideo for video understanding. The code will be released at https://github.com/OpenGVLab/InternVideo.
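Below is a minimal, hedged sketch of how the two pretraining objectives and the learnable coordination described above might be wired together; it is not the official InternVideo implementation, and all module and variable names (e.g. `mvm_encoder`, `contrastive_encoder`, `alpha`) are illustrative assumptions.

```python
# Sketch only: combining a masked-video-modeling branch with a video-language
# contrastive branch, and fusing their video representations through a learnable weight.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualObjectiveVideoModel(nn.Module):
    def __init__(self, mvm_encoder, contrastive_encoder):
        super().__init__()
        self.mvm_encoder = mvm_encoder                   # generative (masked video modeling) branch
        self.contrastive_encoder = contrastive_encoder   # discriminative (video-language) branch
        self.alpha = nn.Parameter(torch.tensor(0.5))     # learnable coordination weight (assumed form)

    def fused_representation(self, video):
        z_gen = self.mvm_encoder(video)                  # features from the generative branch
        z_dis = self.contrastive_encoder(video)          # features from the discriminative branch
        w = torch.sigmoid(self.alpha)                    # keep the mixing weight in (0, 1)
        return w * z_gen + (1.0 - w) * z_dis

def contrastive_loss(video_emb, text_emb, temperature=0.07):
    """Standard InfoNCE over a batch of paired video/text embeddings."""
    video_emb = F.normalize(video_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = video_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```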
Arbitrary style transfer (AST) transfers arbitrary artistic styles onto content images. Despite the recent rapid progress, existing AST methods are either incapable or too slow to run at ultra-resolutions (e.g., 4K) with limited resources, which heavily hinders their further applications. In this paper, we tackle this dilemma by learning a straightforward and lightweight model, dubbed MicroAST. The key insight is to completely abandon the use of cumbersome pre-trained Deep Convolutional Neural Networks (e.g., VGG) at inference. Instead, we design two micro encoders (content and style encoders) and one micro decoder for style transfer. The content encoder aims at extracting the main structure of the content image. The style encoder, coupled with a modulator, encodes the style image into learnable dual-modulation signals that modulate both intermediate features and convolutional filters of the decoder, thus injecting more sophisticated and flexible style signals to guide the stylizations. In addition, to boost the ability of the style encoder to extract more distinct and representative style signals, we also introduce a new style signal contrastive loss in our model. Compared to the state of the art, our MicroAST not only produces visually superior results but also is 5-73 times smaller and 6-18 times faster, for the first time enabling super-fast (about 0.5 seconds) AST at 4K ultra-resolutions. Code is available at https://github.com/EndyWon/MicroAST.
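A rough sketch of the dual-modulation idea is given below: a style code from the micro style encoder modulates both the decoder's intermediate features and its convolutional filters. This is not the released MicroAST code; the shapes, names, and the exact modulation form are assumptions made for illustration.

```python
# Sketch only: one decoder block whose conv filters and output features are both
# modulated by a style code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualModulatedConv(nn.Module):
    def __init__(self, in_ch, out_ch, style_dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, 3, 3) * 0.02)
        self.to_feat_mod = nn.Linear(style_dim, 2 * out_ch)   # scale & shift for features
        self.to_filter_mod = nn.Linear(style_dim, in_ch)      # per-input-channel filter gain

    def forward(self, x, style_code):
        gain = self.to_filter_mod(style_code).mean(0)          # (in_ch,)
        weight = self.weight * gain.view(1, -1, 1, 1)          # modulate the conv filters
        out = F.conv2d(x, weight, padding=1)
        scale, shift = self.to_feat_mod(style_code).mean(0).chunk(2)
        return out * (1 + scale.view(1, -1, 1, 1)) + shift.view(1, -1, 1, 1)

# Usage idea: features from the micro content encoder pass through such blocks in the
# decoder, with style_code produced by the micro style encoder plus its modulator.
```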
The 1$^{\text{st}}$ Workshop on Maritime Computer Vision (MaCVi) 2023 focused on maritime computer vision for Unmanned Aerial Vehicles (UAV) and Unmanned Surface Vehicle (USV), and organized several subchallenges in this domain: (i) UAV-based Maritime Object Detection, (ii) UAV-based Maritime Object Tracking, (iii) USV-based Maritime Obstacle Segmentation and (iv) USV-based Maritime Obstacle Detection. The subchallenges were based on the SeaDronesSee and MODS benchmarks. This report summarizes the main findings of the individual subchallenges and introduces a new benchmark, called SeaDronesSee Object Detection v2, which extends the previous benchmark by including more classes and footage. We provide statistical and qualitative analyses, and assess trends in the best-performing methodologies of over 130 submissions. The methods are summarized in the appendix. The datasets, evaluation code and the leaderboard are publicly available at https://seadronessee.cs.uni-tuebingen.de/macvi.
We study a novel and important communication pattern in large-scale model-parallel deep learning (DL), which we call cross-mesh resharding. This pattern emerges when the two paradigms of model parallelism - intra-operator and inter-operator parallelism - are combined to support large models on large clusters. In cross-mesh resharding, a sharded tensor needs to be sent from a source device mesh to a destination device mesh, on which the tensor may be distributed with the same or different layouts. We formalize this as a many-to-many multicast communication problem, and show that existing approaches either are sub-optimal or do not generalize to different network topologies or tensor layouts, which result from different model architectures and parallelism strategies. We then propose two contributions to address cross-mesh resharding: an efficient broadcast-based communication system, and an "overlapping-friendly" pipeline schedule. On microbenchmarks, our overall system outperforms existing ones by up to 10x across various tensor and mesh layouts. On end-to-end training of two large models, GPT-3 and U-Transformer, we improve throughput by 10% and 50%, respectively.
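The snippet below is a simplified illustration of the broadcast-based intuition: each distinct tensor slice needed on the destination mesh is broadcast once from a single owning source device to every destination device that requires it, instead of issuing redundant point-to-point sends. It models only the planning step and is not the paper's actual communication system.

```python
# Sketch only: plan one broadcast per required tensor slice during cross-mesh resharding.
def plan_resharding(src_placement, dst_placement):
    """src_placement / dst_placement: dict mapping slice_id -> list of device ids."""
    plan = []
    for slice_id, dst_devices in dst_placement.items():
        owners = src_placement.get(slice_id, [])
        if not owners:
            raise ValueError(f"slice {slice_id} is not present on the source mesh")
        # One broadcast per slice: a single owner sends to every destination device.
        plan.append({"slice": slice_id, "root": owners[0], "receivers": sorted(dst_devices)})
    return plan

# Example: a tensor sharded into two slices on source devices 0/1 must be fully
# replicated on destination devices 4-7.
src = {"s0": [0], "s1": [1]}
dst = {"s0": [4, 5, 6, 7], "s1": [4, 5, 6, 7]}
print(plan_resharding(src, dst))
```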
Offline reinforcement learning has attracted great interest for addressing the application challenges of conventional reinforcement learning, as it trains an agent from a previously collected dataset without any interaction. To address the overestimation of out-of-distribution (OOD) actions, conservative estimation assigns lower values to all inputs. Previous conservative estimation methods usually struggle to avoid the impact of OOD actions on Q-value estimates, and these algorithms often have to sacrifice some computational efficiency to achieve conservatism. In this paper, we propose a simple conservative estimation method, Double Conservative Estimation (DCE), which uses two conservative estimation mechanisms to constrain the policy. Our algorithm introduces a V-function to avoid errors caused by out-of-distribution actions while implicitly deriving a conservative estimate. In addition, our algorithm uses a controllable penalty term that changes the degree of conservatism during training. We theoretically show how this method affects the estimation of OOD and in-distribution actions. Our experiments show that the two conservative estimation mechanisms separately influence the estimation of all state-action pairs. DCE demonstrates state-of-the-art performance on D4RL.
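Below is a rough, heavily hedged sketch of one way to read the double conservative estimation idea: a V-function trained only on dataset actions (so it never queries OOD actions) plus a Q-function trained with a controllable penalty on policy actions. It is an interpretation of the abstract, not the authors' exact loss; the coefficient `beta` stands in for the controllable penalty term.

```python
# Sketch only: V is fit to Q evaluated on dataset actions, Q bootstraps from V, and a
# controllable penalty pushes down Q on policy actions relative to dataset actions.
import torch
import torch.nn.functional as F

def dce_style_losses(q_net, v_net, policy, batch, gamma=0.99, beta=1.0):
    s, a, r, s_next, done = batch  # tensors sampled from the offline dataset
    # V-target uses only dataset actions, avoiding OOD queries.
    with torch.no_grad():
        v_target = q_net(s, a)
    v_loss = F.mse_loss(v_net(s), v_target)

    # Q-target bootstraps from V on the next state.
    with torch.no_grad():
        q_target = r + gamma * (1.0 - done) * v_net(s_next)
    q_loss = F.mse_loss(q_net(s, a), q_target)

    # Controllable conservatism: beta tunes how conservative the estimate is during training.
    pi_a = policy(s)
    penalty = q_net(s, pi_a).mean() - q_net(s, a).mean()
    return q_loss + beta * penalty, v_loss
```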
Increasing the number of layers in on-chip photonic neural networks (PNNs) is crucial for improving their model performance. However, the successive cascading of network hidden layers leads to a larger integrated photonic chip area. To address this issue, we propose the optical neural ordinary differential equations (ON-ODE) architecture, which parameterizes the continuous dynamics of hidden layers with optical ODE solvers. ON-ODE consists of PNNs followed by a photonic integrator and an optical feedback loop, which can be configured to represent residual neural networks (ResNets) and recurrent neural networks while effectively reducing the chip area occupancy. For the interference-based optoelectronic nonlinear hidden layer, numerical experiments show that a single-hidden-layer ON-ODE achieves roughly the same accuracy as a two-layer optical ResNet on image classification tasks. Furthermore, ON-ODE improves the model classification accuracy for the diffraction-based all-optical linear hidden layer. The time-dependent dynamics property of ON-ODE is further applied to trajectory prediction with high accuracy.
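The following is a hedged, purely electronic analogue of the ON-ODE idea: a single parameterized hidden layer defines the continuous dynamics dz/dt = f(z), and a fixed-step integrator (standing in for the photonic integrator with optical feedback) evolves the state, so one physical layer plays the role of a deep residual stack. All details here are illustrative.

```python
# Sketch only: one hidden layer reused through an Euler integration loop.
import torch
import torch.nn as nn

class ODEHiddenLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.Tanh())  # the single hidden layer

    def forward(self, z, t_span=1.0, steps=8):
        dt = t_span / steps
        for _ in range(steps):          # fixed-step Euler integration of dz/dt = f(z)
            z = z + dt * self.f(z)      # feedback loop: the output re-enters the same layer
        return z

# With steps=1 this reduces to a single residual block; larger `steps` emulate a deeper
# ResNet without adding physical layers, which is the chip-area saving noted above.
```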
We present a simple yet effective self-training approach, named STAD, for low-resource relation extraction. The approach first classifies the automatically annotated instances into two groups, confident instances and uncertain instances, according to the probabilities predicted by a teacher model. In contrast to most previous studies, which mainly exploit only the confident instances for self-training, we make use of the uncertain instances. To this end, we propose a method to identify ambiguous but useful instances from the uncertain instances, and then divide the relations into a candidate-label set and a negative-label set for each ambiguous instance. Next, we propose a set-negative training method on the negative-label sets of the ambiguous instances, together with positive training on the confident instances. Finally, a joint training method is proposed to build the final relation extraction system on all the data. Experimental results on two widely used datasets, SemEval2010 Task-8 and Re-TACRED, under low-resource settings demonstrate that this new self-training approach indeed achieves significant and consistent improvements when compared with several competitive self-training systems. Code is publicly available at https://github.com/jjyunlp/stad
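A hedged sketch of the instance splitting and set-negative training described above follows: instances are split by the teacher's maximum predicted probability, and for ambiguous instances the probability mass assigned to relations outside a top-k candidate set is pushed down. The threshold and top-k rule are illustrative assumptions, not the paper's exact recipe.

```python
# Sketch only: confidence-based splitting plus a set-negative loss for ambiguous instances.
import torch
import torch.nn.functional as F

def split_instances(teacher_probs, threshold=0.9):
    conf_mask = teacher_probs.max(dim=-1).values >= threshold
    return conf_mask, ~conf_mask  # confident vs. uncertain/ambiguous instances

def set_negative_loss(student_logits, teacher_probs, k=3):
    """Penalize probability mass assigned to relations outside the candidate set."""
    candidate = torch.topk(teacher_probs, k, dim=-1).indices
    neg_mask = torch.ones_like(student_logits, dtype=torch.bool)
    neg_mask.scatter_(-1, candidate, False)          # True only for negative labels
    probs = F.softmax(student_logits, dim=-1)
    neg_mass = (probs * neg_mask).sum(dim=-1)
    return -torch.log(1.0 - neg_mass + 1e-8).mean()  # push negative-label mass toward zero
```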
Recent studies have shown remarkable success in universal style transfer, which transfers arbitrary visual styles to content images. However, existing methods suffer from an aesthetic-unrealistic problem: they introduce disharmonious patterns and evident artifacts, making the results easy to distinguish from real paintings. To address this limitation, we propose AesUST, a novel aesthetic-enhanced universal style transfer approach that can generate aesthetically more realistic and pleasing results for arbitrary styles. Specifically, our approach introduces an aesthetic discriminator to learn universal human-favored aesthetic features from a large corpus of artist-created paintings. The aesthetic features are then incorporated to enhance the style transfer process via a novel aesthetic-aware style-attention (AesSA) module. This AesSA module enables our AesUST to integrate style patterns effectively and flexibly according to the global aesthetic channel distribution of the style image and the local semantic spatial distribution of the content image. Moreover, we develop a new two-stage transfer training strategy with two aesthetic regularizations to train our model more effectively, further improving the stylization performance. Extensive experiments and user studies demonstrate that our approach synthesizes aesthetically more harmonious and realistic results than the state of the art, greatly narrowing the gap with real artist-created paintings. Our code is available at https://github.com/endywon/aesust.
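Below is a generic, hedged sketch of the aesthetic discriminator component: a discriminator trained to separate artist-created paintings from stylized outputs, whose adversarial signal pushes the stylization toward more harmonious results. The actual AesSA module and the two aesthetic regularizations are more involved and are not reproduced here.

```python
# Sketch only: a standard adversarial formulation used as a stand-in for the
# aesthetic discriminator described above.
import torch
import torch.nn.functional as F

def aesthetic_discriminator_loss(disc, real_paintings, stylized):
    real_logits = disc(real_paintings)
    fake_logits = disc(stylized.detach())
    return (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
            + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))

def generator_aesthetic_loss(disc, stylized):
    fake_logits = disc(stylized)  # encourage stylizations the discriminator deems artist-like
    return F.binary_cross_entropy_with_logits(fake_logits, torch.ones_like(fake_logits))
```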
DNN-based video object detection (VOD) offers significant importance and promising opportunities for the autonomous driving and video surveillance industries. However, adversarial patch attacks have drawn great attention in live vision tasks owing to their practicality, feasibility, and strong attack effectiveness. This work proposes Themis, a software/hardware system that defends against adversarial patches to achieve real-time robust video object detection. We observe that adversarial patches exhibit extremely localized surface features in small regions with non-robust predictions, and therefore propose an adversarial region detection algorithm to eliminate the adversarial effect. Themis also proposes a systematic design to efficiently support this algorithm by eliminating redundant computation and memory traffic. Experimental results show that the proposed method can effectively recover the system from adversarial attacks with negligible hardware overhead.
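The sketch below illustrates only the detection intuition stated above (small, highly localized regions with non-robust predictions), not the Themis implementation: candidate regions are occluded one at a time, and regions whose occlusion drastically changes the detector output are flagged. The window size, threshold, and assumption of a fixed-size score map are all illustrative.

```python
# Sketch only: flag regions whose occlusion strongly perturbs the detector's scores.
import torch

def find_suspicious_regions(detector, frame, window=64, stride=64, threshold=0.5):
    """detector: callable taking a (1, C, H, W) tensor and returning a fixed-size score map."""
    base_scores = detector(frame.unsqueeze(0))
    suspicious = []
    _, H, W = frame.shape
    for y in range(0, H - window + 1, stride):
        for x in range(0, W - window + 1, stride):
            occluded = frame.clone()
            occluded[:, y:y + window, x:x + window] = 0.0        # blank out the candidate region
            scores = detector(occluded.unsqueeze(0))
            if (base_scores - scores).abs().mean() > threshold:  # unstable => likely adversarial
                suspicious.append((y, x, window, window))
    return suspicious
```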
Previous methods based on 3D CNNs, ConvLSTM, or optical flow have achieved great success in video salient object detection (VSOD). However, they still suffer from high computational costs or poor quality of the produced saliency maps. To address these issues, we design a spatio-temporal memory (STM)-based network that extracts useful temporal information for the current frame from adjacent frames as the temporal branch for VSOD. Moreover, previous methods only consider single-frame prediction without temporal association, so the model may not focus sufficiently on temporal information. We therefore introduce inter-frame object motion prediction into VSOD for the first time. Our model follows a standard encoder-decoder architecture. In the encoding stage, we generate high-level temporal features by using the high-level features of the current frame and its adjacent frames, which is more efficient than optical-flow-based methods. In the decoding stage, we propose an effective fusion strategy for the spatial and temporal branches: the semantic information in the high-level features is used to fuse the object details in the low-level features, and the spatio-temporal features are then progressively obtained to reconstruct the saliency map. In addition, inspired by the boundary supervision commonly used in image salient object detection (ISOD), we design a motion-aware loss that predicts object boundary motion and simultaneously performs multi-task learning for VSOD and object motion prediction, which further facilitates the model to extract accurate spatio-temporal features and maintain object integrity. Extensive experiments on several datasets demonstrate the effectiveness of our method, which achieves state-of-the-art metrics on some datasets. The proposed model does not require optical flow or other preprocessing and can reach a speed of nearly 100 FPS during inference.
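A hedged sketch of the multi-task supervision described above follows: the saliency head is trained as usual, while a motion head is supervised with a boundary-motion target derived from consecutive ground-truth masks. The exact loss in the paper may differ; this is only meant to illustrate the motion-aware, multi-task idea.

```python
# Sketch only: joint saliency + boundary-motion supervision on (B, 1, H, W) float masks.
import torch
import torch.nn.functional as F

def boundary_map(mask, kernel=3):
    """Approximate object boundaries as the gap between dilation and erosion of the mask."""
    pad = kernel // 2
    dilated = F.max_pool2d(mask, kernel, stride=1, padding=pad)
    eroded = -F.max_pool2d(-mask, kernel, stride=1, padding=pad)
    return (dilated - eroded).clamp(0, 1)

def motion_aware_loss(pred_saliency, pred_motion, gt_prev, gt_curr):
    saliency_loss = F.binary_cross_entropy_with_logits(pred_saliency, gt_curr)
    gt_motion = (boundary_map(gt_curr) - boundary_map(gt_prev)).abs()  # boundary-motion target
    motion_loss = F.binary_cross_entropy_with_logits(pred_motion, gt_motion)
    return saliency_loss + motion_loss  # joint objective for VSOD and motion prediction
```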